233 research outputs found

    Multi-objective optimal designs in comparative clinical trials with covariates: The reinforced doubly adaptive biased coin design

    The present paper deals with the problem of allocating patients to two competing treatments in the presence of covariates or prognostic factors, in order to achieve a good trade-off among ethical concerns, inferential precision and randomness in the treatment allocations. In particular, we suggest a multipurpose design methodology that combines efficiency and ethical gain when the linear homoscedastic model with both treatment/covariate interactions and interactions among covariates is adopted. The ensuing compound optimal allocations of the treatments depend on the covariates and their distribution in the population of interest, as well as on the unknown parameters of the model. Therefore, we introduce the reinforced doubly adaptive biased coin design, namely a general class of covariate-adjusted response-adaptive procedures that includes both continuous and discontinuous randomization functions, aimed at targeting any desired allocation proportion. The properties of this proposal are described both theoretically and through simulations. Comment: Published at http://dx.doi.org/10.1214/12-AOS1007 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
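    As an illustrative companion to this abstract, the sketch below shows how a doubly adaptive biased coin rule of the Hu-Zhang type can steer the observed allocation proportion toward a target. It is a minimal sketch, not the authors' reinforced covariate-adjusted procedure: the fixed target and the tuning exponent gamma are assumptions made here for concreteness, whereas in the paper the target depends on the covariates and the unknown model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def dbcd_probability(x, rho, gamma=2.0):
    """Hu-Zhang style allocation function: probability of assigning
    treatment A when the current proportion on A is x and the
    (estimated) target proportion is rho."""
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return num / den

# Illustrative trial: target 60% of patients on treatment A.
target, n = 0.6, 500
assignments = []
for i in range(n):
    x = np.mean(assignments) if assignments else 0.5
    p = dbcd_probability(x, target)
    assignments.append(rng.random() < p)

print("final proportion on A:", np.mean(assignments))
```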

    On the almost sure convergence of adaptive allocation procedures

    In this paper, we provide some general convergence results for adaptive designs for treatment comparison, both in the absence and in the presence of covariates. In particular, we demonstrate the almost sure convergence of the treatment allocation proportion for a vast class of adaptive procedures, including designs that have not been formally investigated but mainly explored through simulations, such as Atkinson's optimum biased coin design, Pocock and Simon's minimization method and some of its generalizations. Although the large majority of the proposals in the literature rely on continuous allocation rules, our results allow us to prove, within a unified mathematical framework, the convergence of adaptive allocation methods based on both continuous and discontinuous randomization functions. Several examples from earlier works are included in order to enhance applicability, and our approach provides substantial insight for future proposals, especially in the absence of a prefixed target and for designs characterized by sequences of allocation rules. Comment: Published at http://dx.doi.org/10.3150/13-BEJ591 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
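    The convergence phenomenon the paper studies can be illustrated empirically. The following sketch simulates Efron's biased coin, a classical discontinuous allocation rule of the kind covered by this framework, and shows the allocation proportion settling at the balanced target 1/2. The bias parameter p = 2/3 is an illustrative choice; this is a simulation, not a substitute for the paper's almost sure convergence results.

```python
import numpy as np

rng = np.random.default_rng(1)

def efron_biased_coin(n_patients, p=2/3):
    """Efron's biased coin: assign the under-represented treatment with
    probability p; toss a fair coin when the groups are balanced.
    A discontinuous allocation rule whose proportion converges to 1/2."""
    n_a = 0
    proportions = []
    for i in range(1, n_patients + 1):
        imbalance = 2 * n_a - (i - 1)      # (#A - #B) among previous patients
        if imbalance == 0:
            prob_a = 0.5
        elif imbalance < 0:                # A is under-represented
            prob_a = p
        else:
            prob_a = 1 - p
        n_a += rng.random() < prob_a
        proportions.append(n_a / i)
    return proportions

trajectory = efron_biased_coin(2000)
print("allocation proportion after 2000 patients:", trajectory[-1])
```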

    Simulated annealing for balancing covariates

    Covariate balance is one of the fundamental issues in designing experiments for treatment comparisons, especially in randomized clinical trials. In this article, we introduce a new class of covariate-adaptive procedures based on the simulated annealing algorithm, aimed at balancing the allocations of two competing treatments across a set of pre-specified covariates. Due to the nature of simulated annealing, these designs are intrinsically randomized, and thus completely unpredictable, as well as very flexible: they can manage both quantitative and qualitative factors and can be implemented in a static version as well as sequentially. The properties of the suggested proposal are described, showing a significant improvement in terms of covariate balance and inferential accuracy with respect to the other procedures proposed in the literature. An illustrative example based on real data is also discussed.
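    A minimal sketch of the idea, under simplifying assumptions (static allocation, an even number of patients, covariate-mean imbalance as the objective): simulated annealing searches over 50/50 splits of the patients, occasionally accepting worsening swaps to escape local optima. The objective function and cooling schedule below are illustrative choices, not the article's exact design criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

def imbalance(X, z):
    """Squared norm of the difference between the covariate means of the
    two treatment arms (smaller means better balance)."""
    diff = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    return float(diff @ diff)

def sa_balance(X, n_iter=5000, t0=1.0, cooling=0.999):
    """Static simulated-annealing search over 50/50 allocations."""
    n = X.shape[0]
    z = rng.permutation(np.repeat([0, 1], n // 2))   # random starting split
    cost, t = imbalance(X, z), t0
    for _ in range(n_iter):
        i = rng.choice(np.flatnonzero(z == 0))
        j = rng.choice(np.flatnonzero(z == 1))
        z_new = z.copy()
        z_new[i], z_new[j] = 1, 0                    # swap one unit per arm
        new_cost = imbalance(X, z_new)
        # Accept improvements always, worsenings with a temperature-dependent probability.
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / t):
            z, cost = z_new, new_cost
        t *= cooling
    return z, cost

X = rng.normal(size=(100, 5))                        # 100 patients, 5 covariates
allocation, final_cost = sa_balance(X)
print("final covariate imbalance:", final_cost)
```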

    New insights into adaptive enrichment designs

    The transition towards personalized medicine is under way, and the new experimental framework raises several challenges from clinical, ethical, logistical, regulatory, and statistical perspectives. To face these challenges, innovative study designs of increasing complexity have been proposed. In particular, adaptive enrichment designs are becoming more attractive for their flexibility. However, these procedures rely on an increasing number of parameters that are unknown at the planning stage of the clinical trial, so the study design requires particular care. This review is dedicated to adaptive enrichment studies with a focus on design aspects. While many papers deal with methods for the analysis, sample size determination and the optimal allocation problem have been overlooked. We discuss the multiple aspects involved in adaptive enrichment designs that contribute to their advantages and disadvantages. The decision of whether or not it is worth enriching should be driven by clinical and ethical considerations as well as scientific and statistical concerns.

    A simple solution to the inadequacy of asymptotic likelihood-based inference for response-adaptive clinical trials

    The present paper discusses drawbacks and limitations of likelihood-based inference in sequential clinical trials for treatment comparisons managed via Response-Adaptive Randomization. Taking into account the most common statistical models for the primary outcome (namely binary, Poisson, exponential and normal data), we derive the conditions under which (i) the classical confidence intervals degenerate and (ii) the Wald test becomes inconsistent and strongly affected by the nuisance parameters, also displaying a non-monotonic power. To overcome these drawbacks, we provide a very simple solution that can preserve the fundamental properties of likelihood-based inference. Several illustrative examples and simulation studies are presented in order to confirm the relevance of our results and to provide some practical recommendations.
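    To make the setting concrete, the sketch below runs an illustrative Monte Carlo with the randomized play-the-winner rule, a standard Response-Adaptive Randomization procedure, and computes the usual Wald statistic for the difference of two binomial proportions. The success probabilities, sample size and number of replicates are arbitrary choices; the sketch reproduces the general setting, not the paper's specific conditions or its proposed remedy.

```python
import numpy as np

rng = np.random.default_rng(3)

def rpw_trial(n, p_a, p_b):
    """One trial under the randomized play-the-winner rule: the urn starts
    with one ball per treatment; a success adds a ball of the same
    treatment, a failure adds a ball of the other treatment."""
    urn = [1.0, 1.0]
    succ, tot = [0, 0], [0, 0]
    for _ in range(n):
        arm = 0 if rng.random() < urn[0] / sum(urn) else 1
        y = rng.random() < (p_a if arm == 0 else p_b)
        succ[arm] += y
        tot[arm] += 1
        urn[arm if y else 1 - arm] += 1
    return succ, tot

def wald_stat(succ, tot):
    """Wald statistic for the difference of the two success probabilities."""
    pa, pb = succ[0] / max(tot[0], 1), succ[1] / max(tot[1], 1)
    var = pa * (1 - pa) / max(tot[0], 1) + pb * (1 - pb) / max(tot[1], 1)
    return (pa - pb) / np.sqrt(var) if var > 0 else np.nan

stats = np.array([wald_stat(*rpw_trial(200, p_a=0.9, p_b=0.7))
                  for _ in range(2000)])
stats = stats[~np.isnan(stats)]
print("empirical rejection rate of the Wald test:", np.mean(np.abs(stats) > 1.96))
```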

    Reporting only relative effect measures was potentially misleading: some good practices for improving the soundness of epidemiological results

    Objective: In the medical and epidemiological literature there is a growing tendency to report an excessive number of decimal digits (often three, sometimes four), especially when measures of relative occurrence are small; this can be misleading. Study Design and Setting: We combined mathematical and statistical reasoning about the precision of relative risks with the meaning of the decimal part of the same measures from biological and public health perspectives. Results: We identified a general rule for minimizing the mathematical error due to rounding of relative risks, depending on the background absolute rate, which justifies the use of one or more decimal digits for estimates close to 1. Conclusions: We suggest that both relative and absolute risk measures (the latter expressed as rates) should be reported, and that two decimal digits should be used for relative risks close to 1 only if the background rate is at least 1/1,000 person-years (py). The use of more than two decimal digits is justified only when the background rate is high (i.e., at least 1/10 py).
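    A back-of-the-envelope illustration of why the background rate matters: if a relative risk is rounded to d decimal digits, the implied excess absolute rate changes by at most the background rate times half of the last retained decimal place. The small helper below is a sketch of this bound under that simple reasoning, not the paper's exact rule.

```python
# Illustrative bound (an assumption made here, not the paper's exact rule):
# rounding the relative risk to `decimals` digits perturbs the implied
# excess absolute rate by at most background_rate * 0.5 * 10**(-decimals).
def rounding_error_in_absolute_rate(background_rate, decimals):
    """Worst-case change in the implied excess rate (events per person-year)
    when the relative risk is rounded to `decimals` decimal digits."""
    return background_rate * 0.5 * 10 ** (-decimals)

for rate, label in [(1 / 1000, "1/1,000 py"), (1 / 10, "1/10 py")]:
    for d in (1, 2, 3):
        err = rounding_error_in_absolute_rate(rate, d)
        print(f"background {label}, {d} decimal(s): worst-case error "
              f"{err:.2e} events per person-year")
```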

    Optimal design for correlated processes with input-dependent noise

    Optimal design for parameter estimation in Gaussian process regression models with input-dependent noise is examined. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators that act as statistical surrogates. In the case of stochastic simulators, which produce a random output for a given set of model inputs, repeated evaluations are useful, supporting the use of replicate observations in the experimental design. The findings are also applicable to the wider context of experimental design for Gaussian process regression and kriging. Designs are proposed with the aim of minimising the variance of the Gaussian process parameter estimates. A heteroscedastic Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models. It is empirically shown that the error of approximating the parameter variance by the inverse of the Fisher information is reduced as the number of replicated points is increased. Through a series of simulation experiments on both synthetic data and a systems biology stochastic simulator, optimal designs with replicate observations are shown to outperform space-filling designs both with and without replicate observations. Guidance is provided on best practice for optimal experimental design for stochastic response models.
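    A compact sketch of this kind of design comparison, under illustrative assumptions: a zero-mean Gaussian process with a squared-exponential kernel, a log-linear input-dependent noise variance, Fisher information computed from the standard Gaussian formula with finite-difference derivatives, and a log-determinant (D-optimality style) comparison of a space-filling design against one with replicate observations. The kernel, noise model and parameter values are placeholders, not those used in the paper.

```python
import numpy as np

def cov_matrix(x, theta):
    """Covariance of a zero-mean GP with squared-exponential kernel and
    log-linear input-dependent noise; theta = (log_sf2, log_l, a, b)."""
    log_sf2, log_l, a, b = theta
    sf2, length = np.exp(log_sf2), np.exp(log_l)
    d = x[:, None] - x[None, :]
    K = sf2 * np.exp(-0.5 * (d / length) ** 2)
    K[np.diag_indices_from(K)] += np.exp(a + b * x)   # heteroscedastic noise
    return K

def fisher_information(x, theta, eps=1e-5):
    """I_ij = 0.5 tr(K^-1 dK/dth_i K^-1 dK/dth_j); derivatives by central differences."""
    Kinv = np.linalg.inv(cov_matrix(x, theta))
    grads = []
    for i in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += eps
        tm[i] -= eps
        grads.append((cov_matrix(x, tp) - cov_matrix(x, tm)) / (2 * eps))
    p = len(theta)
    info = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            info[i, j] = 0.5 * np.trace(Kinv @ grads[i] @ Kinv @ grads[j])
    return info

theta = (0.0, np.log(0.3), -2.0, 1.0)                 # illustrative parameter values
space_filling = np.linspace(0.0, 1.0, 12)             # 12 distinct input points
replicated = np.repeat(np.linspace(0.0, 1.0, 4), 3)   # 4 points, 3 replicates each
for name, design in [("space-filling", space_filling), ("replicated", replicated)]:
    _, logdet = np.linalg.slogdet(fisher_information(design, theta))
    print(f"{name:13s} design: log det Fisher information = {logdet:.2f}")
```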

    A computer-aided methodology for the optimization of electrostatic separation processes in recycling

    The rapid growth of technological products has led to an increasing volume of waste electrical and electronic equipment (WEEE), which could represent a valuable source of critical raw materials. However, current mechanical separation processes for recycling are typically poorly operated, making it impossible to adjust the process parameters as a function of the materials under treatment and thus leaving separation potential untapped. Corona electrostatic separation (CES) is one of the most popular processes for separating fine metal and nonmetal particles derived from WEEE. In order to optimize the process operating conditions (i.e., variables) for a given multi-material mixture under treatment, several technological and economic criteria should be jointly considered. This translates into a complex optimization problem that can hardly be solved by a purely experimental approach. As a result, practitioners tend to set the process parameters through a few experiments based on a small material sample and to keep these parameters fixed during the process life-cycle. The use of computer experiments for parameter optimization is a mostly unexplored area in this field. In this work, a computer-aided approach to the problem of optimizing the operational parameters in CES processes is proposed. Three metamodels, developed starting from a multi-body simulation model of the process physics, are presented and compared by means of a numerical and simulation study. Our approach proves to be an effective framework for optimizing CES process performance. Furthermore, by comparing the predicted response surfaces of the metamodels, additional insight into the process behavior over the operating region is obtained.
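    The metamodelling workflow can be sketched in a few lines: run a small computer experiment on the (expensive) simulator, fit a cheap surrogate, and optimize the surrogate over the operating region. The stand-in simulator, the two operating variables and the quadratic response-surface metamodel below are purely illustrative assumptions; the paper's metamodels are built from a multi-body simulation of the CES physics.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulator(voltage, speed):
    """Stand-in for the multi-body CES simulation: an illustrative noisy
    separation-efficiency response (not the real process model)."""
    return (1.0 - (voltage - 0.6) ** 2 - 0.5 * (speed - 0.4) ** 2
            + 0.02 * rng.normal())

# Computer experiment: evaluate the expensive simulator on a small design.
design = rng.uniform(0.0, 1.0, size=(40, 2))
y = np.array([simulator(v, s) for v, s in design])

# Quadratic response-surface metamodel fitted by least squares.
def features(X):
    v, s = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(v), v, s, v * s, v ** 2, s ** 2])

beta, *_ = np.linalg.lstsq(features(design), y, rcond=None)

# Optimize the cheap metamodel over a dense grid of operating conditions.
grid = np.array([(v, s) for v in np.linspace(0, 1, 101)
                        for s in np.linspace(0, 1, 101)])
pred = features(grid) @ beta
best = grid[np.argmax(pred)]
print("metamodel-optimal (voltage, speed):", best)
```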